With the drive to create a decentralized digital economy, Web 3.0 has become a cornerstone of digital transformation, built on computing-force networking, distributed data storage, and blockchain. As quantum devices are rapidly realized, Web 3.0 is being developed in parallel with the deployment of quantum cloud computing and the quantum Internet. In this regard, quantum computing first disrupts the cryptographic systems that currently protect data security, while also reshaping modern cryptography with the advantages of quantum computation and communication. Therefore, in this paper, we introduce a quantum blockchain-driven Web 3.0 framework that provides information-theoretic security for decentralized data transfer and payment transactions. First, we present the framework of quantum blockchain-driven Web 3.0 with future-proof security for the transmission of data and transaction information. Next, we discuss the potential applications of, and challenges in, implementing quantum blockchain in Web 3.0. Finally, we describe a use case for quantum non-fungible tokens (NFTs) and propose a quantum deep learning-based optimal auction for NFT trading that maximizes the achievable revenue and thereby sustains sufficient liquidity in Web 3.0. In this way, the proposed framework can achieve provable security and sustainability for the next-generation decentralized digital society.
As a virtual world that interacts with the real world, the Metaverse encapsulates our expectations for the next-generation Internet while introducing new key performance indicators (KPIs). Conventional ultra-reliable and low-latency communication (URLLC) can satisfy most objective service KPIs, but it is difficult for URLLC to deliver a personalized, immersive Metaverse service experience. Since improving the quality of experience (QoE) can be regarded as a KPI of paramount importance, URLLC is evolving toward next-generation URLLC (xURLLC) to support graphics-based Metaverse services. By allocating more resources to the virtual objects in which users are more interested, a higher QoE can be achieved. In this paper, we study the interaction between the Metaverse service provider (MSP) and the network infrastructure provider (InP) for deploying Metaverse xURLLC services, and provide an optimal contract design framework. Specifically, the utility of the MSP, defined as a function of Metaverse users' QoE, is maximized while the incentives of the InP are ensured. To model the QoE of Metaverse xURLLC services, we propose a novel metric named Meta-Immersion, which incorporates both objective network KPIs and the subjective feelings of Metaverse users. Using a user-object-attention level (UOAL) dataset, we develop and validate an attention-aware rendering capacity allocation scheme to improve QoE. Results show that, compared with conventional URLLC, xURLLC achieves an average QoE improvement of 20.1%. The QoE improvement is even higher, e.g., 40%, when total resources are limited.
Supported by computing and communication technologies, the Metaverse is expected to bring users an unprecedented service experience. However, the growth in the number of Metaverse users places a heavy demand on network resources, especially for Metaverse services that are based on graphical extended reality and require rendering large numbers of virtual objects. To make efficient use of network resources and improve the quality of experience (QoE), we design an attention-aware network resource allocation scheme to achieve customized Metaverse services. The aim is to allocate more network resources to the virtual objects in which users are more interested. We first discuss several key technologies related to Metaverse services, including QoE analysis, eye tracking, and remote rendering. We then review existing datasets and propose the user-object-attention level (UOAL) dataset, which contains the ground-truth attention of 30 users to 96 objects in 1,000 images. A tutorial on how to use UOAL is provided. With the help of UOAL, we propose an attention-aware network resource allocation algorithm that has two steps, i.e., attention prediction and QoE maximization. In particular, we outline the design of two types of attention prediction methods, i.e., interest-aware and time-aware prediction. By using the predicted user-object-attention values, network resources such as the rendering capacity of edge devices can be optimally allocated to maximize QoE. Finally, we present promising research directions related to Metaverse services.
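The QoE-maximization step can be illustrated with a minimal sketch, assuming a simple proportional model in which a fixed rendering budget is split across virtual objects according to their predicted attention values (the function name and the proportional rule are illustrative, not the paper's exact algorithm):

```python
# Illustrative sketch: split a fixed rendering budget across virtual
# objects in proportion to their predicted user-object-attention values.
def allocate_rendering_capacity(attention, total_capacity):
    total_attention = sum(attention)
    if total_attention == 0:
        # No predicted interest: fall back to a uniform split.
        return [total_capacity / len(attention)] * len(attention)
    return [total_capacity * a / total_attention for a in attention]

# Three objects; the second attracts the most predicted attention.
shares = allocate_rendering_capacity([0.1, 0.6, 0.3], total_capacity=100.0)
```

Under this rule the most-attended object receives the largest share of the edge device's rendering capacity, which is the qualitative behavior the abstract describes.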
Ultrasound (US) is widely used for its advantages of real-time imaging, radiation-free operation, and portability. In clinical practice, analysis and diagnosis often rely on US sequences rather than a single image to obtain dynamic anatomical information. This is challenging for novices to learn, because practicing with adequate videos from patients is clinically unfeasible. In this paper, we propose a novel framework to synthesize high-fidelity US videos. Specifically, the synthesized videos are generated by animating a source content image based on the motion of a given driving video. Our highlights are threefold. First, leveraging the advantages of self-supervised learning, our proposed system is trained in a weakly supervised manner for keypoint detection. These keypoints then provide vital information for handling complex, highly dynamic motions in US videos. Second, we decouple content and texture learning using dual decoders to effectively reduce the difficulty of model learning. Last, we adopt an adversarial training strategy with GAN losses to further improve the sharpness of the generated videos, narrowing the gap between real and synthesized videos. We validate our method on a large in-house pelvic dataset with highly dynamic motion. Extensive evaluation metrics and user studies demonstrate the effectiveness of our proposed method.
Aerial access networks have been recognized as a significant driver of various Internet of Things (IoT) services and applications. In particular, the aerial computing network infrastructure centered on the Internet of Drones has set off a new revolution in automatic image recognition. This emerging technology relies on sharing ground-truth labeled data among unmanned aerial vehicle (UAV) swarms to train high-quality automatic image recognition models. However, such an approach brings data privacy and data availability challenges. To address these issues, we first present a semi-supervised federated learning (SSFL) framework for privacy-preserving UAV image recognition. Specifically, we propose a model parameter mixing strategy to improve the naive combination of FL and semi-supervised learning methods under two realistic scenarios (labels-at-client and labels-at-server), referred to as Federated Mixing (FedMix). Furthermore, there are significant differences in the number, features, and distribution of the local data collected by UAVs using different camera modules in different environments, i.e., statistical heterogeneity. To alleviate the statistical heterogeneity problem, we propose an aggregation rule based on the frequency of clients' participation in training, namely the FedFreq aggregation rule, which can adjust the weight of each local model according to its client's participation frequency. Numerical results show that the performance of our proposed method is significantly better than that of the current baselines and is robust to different non-IID levels of client data.
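A hedged sketch in the spirit of the frequency-weighted aggregation idea described above: each client's local model is weighted by its participation frequency when forming the global model. The function name and the exact (proportional) weighting are assumptions for illustration, not the paper's formula.

```python
# Weight each client's local model by its participation frequency when
# aggregating into the global model (proportional weighting, illustrative).
def fedfreq_aggregate(local_models, frequencies):
    """local_models: list of parameter vectors (lists of floats);
    frequencies: how often each client has participated so far."""
    total = sum(frequencies)
    weights = [f / total for f in frequencies]
    dim = len(local_models[0])
    return [sum(w * m[i] for w, m in zip(weights, local_models))
            for i in range(dim)]

# Two clients; the second has participated three times as often.
global_model = fedfreq_aggregate([[1.0, 2.0], [3.0, 4.0]], frequencies=[1, 3])
```

Here the frequently participating client contributes three quarters of the aggregate, so the global model is pulled toward its parameters.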
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably faces a trade-off between accuracy and efficiency. Recent advances in neural operators, a class of mesh-independent, neural-network-based PDE solvers, suggest that this challenge can be overcome. In this emerging direction, the Koopman neural operator (KNO) is a representative demonstration that outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and on ERA5 (one of the largest high-resolution datasets of global-scale climate fields). These demonstrations suggest that KoopmanLab is worth considering for diverse applications involving partial differential equations.
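The Koopman-operator idea underlying KNO can be sketched with plain NumPy rather than the KoopmanLab API (whose interface is not reproduced here): collect snapshot pairs of (lifted) states and fit, by least squares, a linear operator K that advances the state one step, in the spirit of extended dynamic mode decomposition.

```python
import numpy as np

# Fit a linear one-step evolution operator from snapshot pairs (EDMD-style).
def fit_koopman_operator(X, Y):
    """X, Y: (n_snapshots, n_features), with Y the one-step evolution of X.
    Returns K minimizing ||X K - Y||_F."""
    K, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return K

# Toy check on an exactly linear system x_{t+1} = A x_t, where identity
# observables suffice and the fitted K recovers A^T.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
X = rng.standard_normal((50, 2))
Y = X @ A.T
K = fit_koopman_operator(X, Y)
```

For nonlinear PDE dynamics, the lifting to observables is what a learned architecture such as KNO provides; this toy uses the identity lifting only so the recovered operator can be checked by hand.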
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment in an untrimmed video given a sentence query. All existing works first use a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods overlook two indispensable issues: 1) Boundary bias: the annotated target segment generally refers to two specific frames as the corresponding start and end timestamps. The video downsampling process may lose these two frames and take adjacent irrelevant frames as new boundaries. 2) Reasoning bias: such incorrect new boundary frames also lead to reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames that enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationships among these frames and generate soft labels on boundaries for more accurate frame-query reasoning. This mechanism can also supplement the absent consecutive visual semantics to the sampled sparse frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
Time-series anomaly detection is an important task that has been widely applied in industry. Since manual data annotation is expensive and inefficient, most applications adopt unsupervised anomaly detection methods, but the results are usually sub-optimal and unsatisfactory to end customers. Weak supervision is a promising paradigm for obtaining considerable labels at low cost, enabling customers to label data by writing heuristic rules rather than annotating each instance individually. However, in the time-series domain it is hard for people to write reasonable labeling functions, as time-series data is numerically continuous and difficult to interpret. In this paper, we propose a Label-Efficient Interactive Time-Series Anomaly Detection (LEIAD) system, which enables a user to improve the results of unsupervised anomaly detection through only a small number of interactions with the system. To achieve this goal, the system integrates weak supervision and active learning collaboratively while generating labeling functions automatically using only a few labeled data points. All of these techniques are complementary and can promote each other in a mutually reinforcing manner. We conduct experiments on three time-series anomaly detection datasets, demonstrating that the proposed system is superior to existing solutions in both the weak supervision and active learning areas. The system has also been tested in a real industrial scenario to show its practicality.
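To make the labeling-function idea concrete, here is an illustrative sketch (not the LEIAD system itself): two heuristic rules over a numeric series, combined by majority vote in the weak-supervision style the abstract describes. The rule names and thresholds are assumptions. Convention: 1 = anomaly, 0 = normal, -1 = abstain.

```python
def lf_spike(series, i, threshold=3.0):
    """Flag points far from the series mean (z-score rule)."""
    mean = sum(series) / len(series)
    std = (sum((x - mean) ** 2 for x in series) / len(series)) ** 0.5
    if std == 0:
        return -1  # abstain on constant series
    return 1 if abs(series[i] - mean) / std > threshold else 0

def lf_jump(series, i, delta=5.0):
    """Flag sudden jumps relative to the previous point."""
    if i == 0:
        return -1  # abstain at the series boundary
    return 1 if abs(series[i] - series[i - 1]) > delta else 0

def majority_vote(votes):
    """Resolve non-abstaining votes; default to normal when all abstain."""
    votes = [v for v in votes if v != -1]
    return 1 if votes and sum(votes) > len(votes) / 2 else 0

series = [0.0] * 19 + [100.0]
label = majority_vote([lf(series, 19) for lf in (lf_spike, lf_jump)])  # 1
```

Writing such rules for continuous signals is exactly what the abstract identifies as difficult, which is why the system proposes generating labeling functions automatically from a few labeled points.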
This paper investigates the use of artificial neural networks (ANNs) to solve differential equations (DEs) and the construction of a loss function that satisfies both the differential equation and its initial/boundary conditions. In Section 2, the loss function is generalized to the $n^\text{th}$-order ordinary differential equation (ODE). Other methods of construction are examined in Section 3 and applied to three different models to assess their effectiveness.
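A minimal sketch of this kind of loss construction (an assumption for illustration, not the paper's Section 2 formulation), for the first-order ODE y' = -y with y(0) = 1: the loss combines the mean squared ODE residual at collocation points with a squared initial-condition penalty. An ANN would normally provide y; here y is any callable, and the derivative is approximated by central differences.

```python
import math

def ode_loss(y, xs, h=1e-5):
    """Mean squared residual of y' + y = 0 at points xs (central
    differences), plus the initial-condition penalty (y(0) - 1)^2."""
    residual = sum(((y(x + h) - y(x - h)) / (2 * h) + y(x)) ** 2 for x in xs)
    return residual / len(xs) + (y(0.0) - 1.0) ** 2

xs = [0.1 * k for k in range(1, 10)]
loss_true = ode_loss(lambda x: math.exp(-x), xs)  # near zero: exact solution
loss_bad = ode_loss(lambda x: 1.0, xs)            # large: constant violates y' = -y
```

Training then amounts to minimizing this loss over the network's parameters, with both the equation and the initial condition enforced softly through the same objective.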
Kernels are efficient in representing nonlocal dependence, and they are widely used to design operators between function spaces. Thus, learning kernels in operators from data is an inverse problem of general interest. Due to the nonlocal dependence, the inverse problem can be severely ill-posed, with a data-dependent singular inversion operator. The Bayesian approach overcomes the ill-posedness through a non-degenerate prior. However, a fixed non-degenerate prior leads to a divergent posterior mean as the observation noise becomes small, if the data induces a perturbation in the eigenspace of zero eigenvalues of the inversion operator. We introduce a data-adaptive prior to achieve a stable posterior whose mean always has a small-noise limit. The data-adaptive prior's covariance is the inversion operator, with a hyper-parameter selected adaptively from the data by the L-curve method. Furthermore, we provide a detailed analysis of the computational practice of the data-adaptive prior, and demonstrate it on Toeplitz matrices and integral operators. Numerical tests show that a fixed prior can lead to a divergent posterior mean in the presence of any of four types of error: discretization error, model error, partial observations, and an incorrect noise assumption. In contrast, the data-adaptive prior always attains a posterior mean with a small-noise limit.
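A hedged numerical sketch of the stabilization described above: for a linear inverse problem A x = b with an ill-conditioned (here Toeplitz-like) A, a regularized estimate of the form (AᵀA + λC)⁻¹Aᵀb stays bounded as noise shrinks. In the paper, C and λ come from the data-adaptive prior (λ selected by the L-curve); here we simply fix C = I and a small λ for illustration.

```python
import numpy as np

# Regularized posterior-mean-style estimate for A x = b with C = I.
def regularized_mean(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

A = np.array([[1.0, 0.999],
              [0.999, 1.0]])            # nearly singular forward map
b = A @ np.array([1.0, 2.0])            # noise-free data from x = (1, 2)
x_reg = regularized_mean(A, b, lam=1e-9)  # close to (1, 2)
```

The point of the adaptive choice is that a λ fixed in advance over-smooths or under-regularizes depending on the noise level, whereas an L-curve-selected λ tracks the data.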